We present DyFOS, an active perception method that Dynamically Finds Optimal States to minimize localization error while avoiding obstacles and occlusions. We consider the scenario where a ground target without any exteroceptive sensors must rely on an aerial observer for pose and uncertainty estimates to localize itself along an obstacle-filled path. The observer uses a downward-facing camera to estimate the target's pose and uncertainty. However, the pose uncertainty is a function of the states of the observer, target, and surrounding environment. To find an optimal state that minimizes the target's localization uncertainty, DyFOS uses a localization error prediction pipeline in an optimization search. Given the states mentioned above, the pipeline predicts the target's localization uncertainty with the help of a trained, complex state-dependent sensor measurement model (which is a probabilistic neural network in our case). Our pipeline also predicts target occlusion and obstacle collision to remove undesirable observer states. The output of the optimization search is an optimal observer state that minimizes target localization uncertainty while avoiding occlusion and collision. We evaluate the proposed method using numerical and simulated (Gazebo) experiments. Our results show that DyFOS is almost 100x faster than, yet performs just as well as, a brute-force search. Furthermore, DyFOS yielded lower localization errors than random and heuristic searches.
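The search itself admits a compact sketch. Below is a minimal, hypothetical rendering of the selection loop described above; the names `uncertainty_net`, `predicts_occlusion`, and `predicts_collision` are illustrative stand-ins for the trained measurement model and the occlusion/collision predictors, not the authors' code.

```python
# Minimal sketch of DyFOS-style observer-state selection (hypothetical names).
import numpy as np

def select_observer_state(candidates, target_state, env_state,
                          uncertainty_net, predicts_occlusion, predicts_collision):
    """Return the candidate observer state with the lowest predicted
    target-localization uncertainty, skipping occluded/colliding states."""
    best_state, best_score = None, np.inf
    for obs_state in candidates:
        # Prune undesirable states before scoring them.
        if predicts_occlusion(obs_state, target_state, env_state):
            continue
        if predicts_collision(obs_state, env_state):
            continue
        # Scalarize the predicted covariance, e.g., by its trace.
        cov = uncertainty_net(obs_state, target_state, env_state)
        score = np.trace(cov)
        if score < best_score:
            best_state, best_score = obs_state, score
    return best_state
```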
Green Security Games with real-time information (GSG-I) add real-time information about the agents' movement to the typical GSG formulation. Prior works on GSG-I have used deep reinforcement learning (DRL) to learn the best policy for the agent in such an environment without any need to store the huge number of state representations for GSG-I. However, the decision-making process of DRL methods is largely opaque, which results in a lack of trust in their predictions. To tackle this issue, we present an interpretable DRL method for GSG-I that generates visualizations to explain the decisions taken by the DRL algorithm. We also show that this approach outperforms the existing method while requiring a simpler training regimen.
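For readers unfamiliar with how such visual explanations are commonly produced, the sketch below shows one standard technique, gradient-based saliency over the input state; the paper's actual visualization method may differ, and `policy_net` is an assumed PyTorch policy network.

```python
# A hedged sketch of gradient-based saliency for a DRL policy (assumed form).
import torch

def saliency_map(policy_net, state):
    """Gradient of the chosen action's logit w.r.t. the input state."""
    state = state.clone().requires_grad_(True)
    logits = policy_net(state)
    chosen = logits.max(dim=-1).values.sum()
    chosen.backward()
    # Large absolute gradients mark the input regions that most
    # influenced the selected action.
    return state.grad.abs()
```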
Centralized approaches to the multi-robot coverage planning problem lack scalability. Learning-based distributed algorithms offer a scalable avenue, in addition to bringing data-oriented feature generation capabilities to the table, thereby allowing integration with other learning-based approaches. To this end, we propose D2CoPlan, a learning-based, differentiable distributed coverage planner that scales efficiently in runtime and number of agents compared to an expert algorithm, while performing on par with a classical distributed algorithm. Furthermore, we show that D2CoPlan can be seamlessly combined with other learning methods to learn end to end, yielding better solutions than the individually trained modules and opening the door to further research on tasks that remain elusive with classical methods.
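To make the end-to-end claim concrete, here is a hedged sketch of how a differentiable planner can be chained with another learned module so that gradients flow through both during training; the module names are illustrative, not D2CoPlan's actual components.

```python
# Sketch of end-to-end composition with a differentiable planner (illustrative).
import torch.nn as nn

class PerceptionToPlanPipeline(nn.Module):
    def __init__(self, encoder: nn.Module, coverage_planner: nn.Module):
        super().__init__()
        self.encoder = encoder                    # maps raw observations to features
        self.coverage_planner = coverage_planner  # differentiable planner head

    def forward(self, observations):
        features = self.encoder(observations)
        actions = self.coverage_planner(features)
        return actions  # gradients flow through the planner back to the encoder
```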
We study a resource allocation problem for cooperative aerial-ground vehicle routing applications, in which multiple unmanned aerial vehicles (UAVs) with limited battery capacity and multiple unmanned ground vehicles (UGVs), which can also act as mobile recharging stations, must jointly accomplish a mission such as persistently monitoring a set of points. Because of their limited battery capacity, the UAVs must occasionally deviate from the mission to rendezvous with a UGV and recharge. Each UGV can serve only a limited number of UAVs at a time. In contrast to prior work on deterministic multi-robot scheduling, we consider the challenge posed by the stochasticity of the UAVs' energy consumption. We are interested in finding optimal recharging schedules for the UAVs that minimize the travel cost while ensuring that the probability that no UAV runs out of charge before recharging within the planning horizon is greater than a user-defined tolerance. We formulate this problem, the Risk-aware Recharging Rendezvous Problem (RRRP), as an integer linear program (ILP), in which matching constraints capture the resource availability constraints and knapsack constraints capture the success probability constraints. We propose a bicriteria approximation algorithm to solve RRRP. We demonstrate the effectiveness of our formulation and algorithm in the context of a persistent monitoring mission.
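As a concrete illustration of this constraint structure, the hedged sketch below encodes a toy RRRP-style ILP with PuLP; the variables, cost terms, and risk terms are illustrative simplifications rather than the paper's exact model.

```python
# Toy RRRP-style ILP sketch (illustrative, not the paper's exact formulation).
# x[i, j] = 1 assigns UAV i to rendezvous slot j on a UGV; travel_cost and
# risk are assumed precomputed from the stochastic energy model.
import pulp

def solve_rrrp_sketch(uavs, slots, travel_cost, risk, risk_budget, capacity):
    prob = pulp.LpProblem("RRRP_sketch", pulp.LpMinimize)
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in uavs for j in slots}

    # Objective: total travel cost of the chosen rendezvous assignments.
    prob += pulp.lpSum(travel_cost[i][j] * x[i, j] for i in uavs for j in slots)

    # Matching constraints (resource availability): each UAV gets one slot,
    # and each slot serves a bounded number of UAVs.
    for i in uavs:
        prob += pulp.lpSum(x[i, j] for j in slots) == 1
    for j in slots:
        prob += pulp.lpSum(x[i, j] for i in uavs) <= capacity

    # Knapsack constraint (success probability): total risk must stay under
    # the budget implied by the user-defined tolerance.
    prob += pulp.lpSum(risk[i][j] * x[i, j] for i in uavs for j in slots) <= risk_budget

    prob.solve()
    return {k: v.value() for k, v in x.items()}
```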
In this paper, we present a novel heavy-tailed stochastic policy gradient (HT-SPG) algorithm to deal with the challenge of sparse rewards in continuous control problems. Sparse rewards are common in continuous-control robotics tasks such as manipulation and navigation, and they make the learning problem hard because estimating the value function over the state space is non-trivial. This typically demands either reward shaping or expert demonstrations for the sparse-reward environment. However, obtaining high-quality demonstrations is quite expensive and sometimes even impossible. We propose a heavy-tailed policy parameterization along with a momentum-based policy-gradient tracking scheme (HT-SPG) to induce stable exploratory behavior in the algorithm. The proposed algorithm does not require access to expert demonstrations. We test the performance of HT-SPG on various benchmark tasks for continuous control with sparse rewards, such as 1D Mario, Pathological Mountain Car, Sparse Pendulum in OpenAI Gym, and the Sparse MuJoCo environment (Hopper-v2). We show consistent performance improvements across all tasks in terms of high average cumulative reward. HT-SPG also demonstrates improved convergence speed with fewer samples, underscoring the sample efficiency of our proposed algorithm.
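To make the parameterization concrete, here is a minimal sketch of a heavy-tailed (Cauchy) policy in PyTorch; this is an assumed form for illustration, not the authors' implementation.

```python
# Minimal sketch of a heavy-tailed policy parameterization (assumed form).
import torch
import torch.nn as nn

class HeavyTailedPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.loc = nn.Linear(hidden, action_dim)
        self.log_scale = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        h = self.body(state)
        # The Cauchy distribution has undefined variance: occasional large
        # samples push the agent into rarely visited regions of state space.
        return torch.distributions.Cauchy(self.loc(h), self.log_scale.exp())

policy = HeavyTailedPolicy(state_dim=11, action_dim=3)  # e.g., Hopper-v2 sizes
dist = policy(torch.randn(1, 11))
action = dist.sample()
log_prob = dist.log_prob(action).sum(-1)  # used in the policy-gradient update
```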
Monitoring the health and vigor of grasslands is vital for informing management decisions that optimize rotational grazing in agricultural applications. To exploit forage resources and improve land productivity, we need to understand pasture growth patterns, knowledge that is simply unavailable at the state of the art. In this paper, we propose deploying a team of robots to monitor the evolution of an unknown pasture environment in pursuit of this goal. Because such an environment usually evolves slowly, we need to design a strategy for rapidly assessing it over large areas at low cost. We therefore propose an integrated pipeline comprising data synthesis, deep neural network training and prediction, and a multi-robot deployment algorithm that monitors pastures intermittently. Specifically, using expert-informed agricultural data coupled with novel data synthesis in ROS Gazebo, we first propose a new neural network architecture to learn the spatiotemporal dynamics of the environment. Such predictions help us understand pasture growth patterns at large scales and make appropriate monitoring decisions for the future. Based on our predictions, we then design an intermittent multi-robot deployment policy for low-cost monitoring. Finally, we compare the proposed pipeline with other methods, from data synthesis to prediction and planning, to corroborate its performance.
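As a rough illustration of the prediction stage, the sketch below shows one plausible shape for such a spatiotemporal predictor; the paper's actual architecture may differ, and the layer choices here are assumptions.

```python
# Hedged sketch of a spatiotemporal pasture-growth predictor (assumed form):
# given a history of pasture maps, predict the next map.
import torch
import torch.nn as nn

class PastureGrowthPredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        # 3D convolutions mix information across time and space at once.
        self.net = nn.Sequential(
            nn.Conv3d(1, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, history):             # history: (B, 1, T, H, W)
        return self.net(history)[:, :, -1]  # predicted map: (B, 1, H, W)
```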
The problem of decentralized multi-robot target tracking asks for jointly selecting actions, e.g., motion primitives, so that robots maximize target-tracking performance using only local communication. One major challenge for practical implementation is making target-tracking approaches scale to large problem instances. In this work, we propose a general-purpose learning architecture for collaborative target tracking at scale with decentralized communication. In particular, our learning architecture leverages a graph neural network (GNN) to capture the robots' local interactions and learn decentralized decision-making. We train the learning model by imitating an expert solution and deploy the resulting model for decentralized action selection involving only local observations and communication. We demonstrate the performance of our GNN-based learning approach in an active target-tracking scenario with large networks of robots. Simulation results show that our approach nearly matches the tracking performance of the expert algorithm yet runs several orders of magnitude faster with up to 100 robots. Moreover, it slightly outperforms a decentralized greedy algorithm while running significantly faster (especially with more than 20 robots). The results also exhibit our approach's ability to generalize to previously unseen scenarios, e.g., larger environments and larger networks of robots.
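The sketch below illustrates the core idea of decentralized GNN-based action selection in a single round of message passing; the layer structure is an assumption for illustration, not the paper's architecture.

```python
# Illustrative single round of GNN message passing for decentralized action
# selection: each robot aggregates features from its communication neighbors,
# then scores its own motion primitives locally.
import torch
import torch.nn as nn

class LocalGNNPolicy(nn.Module):
    def __init__(self, feat_dim, num_primitives):
        super().__init__()
        self.message = nn.Linear(feat_dim, feat_dim)
        self.score = nn.Linear(2 * feat_dim, num_primitives)

    def forward(self, features, adjacency):
        # features: (N, F) per-robot observations; adjacency: (N, N) 0/1
        # communication graph. Each robot only sees its neighbors' messages.
        msgs = adjacency @ self.message(features)   # sum over neighbors
        joint = torch.cat([features, msgs], dim=-1)
        return self.score(joint)  # (N, num_primitives) action scores per robot
```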
The multiple-path orienteering problem asks for paths for a team of robots that maximize the total reward collected while satisfying budget constraints on path length. This problem models many multi-robot routing tasks, such as exploring unknown environments and gathering information for environmental monitoring. In this paper, we focus on how to make the robot team robust to failures when operating in adversarial environments. We introduce the Robust Multiple-path Orienteering Problem (RMOP), in which we seek worst-case guarantees against an adversary capable of attacking at most $\alpha$ robots. We consider two versions of this problem: RMOP offline and RMOP online. In the offline version, there is no communication or replanning while the robots execute their plans, and our main contribution is a general approximation scheme with a bounded approximation guarantee that depends on $\alpha$ and the approximation factor for single-robot orienteering. In particular, we show that the algorithm yields (i) a constant-factor approximation when the objective function is modular; (ii) a log-factor approximation when the objective function is submodular; and (iii) a constant-factor approximation when the objective function is submodular but the robots are allowed to exceed their path budgets by a bounded amount. In the online version, RMOP is modeled as a two-player sequential game and solved adaptively in a receding-horizon fashion based on Monte Carlo tree search (MCTS). In addition to the theoretical analysis, we perform simulation studies for ocean monitoring and tunnel information-gathering applications to demonstrate the efficacy of our approach.
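For intuition about the offline worst case, the following sketch evaluates the robust objective for a modular reward, where the adversary's best attack is simply to remove the $\alpha$ most valuable paths; the helper is illustrative, not the paper's algorithm.

```python
# Sketch of the worst-case objective RMOP guards against (modular case).
def worst_case_reward(per_robot_rewards, alpha):
    """Team reward after an adversary removes the alpha most valuable paths."""
    survivors = sorted(per_robot_rewards)[:-alpha] if alpha > 0 else per_robot_rewards
    return sum(survivors)

# Example: rewards [9, 7, 4, 2] with alpha=2 leaves 4 + 2 = 6.
assert worst_case_reward([9, 7, 4, 2], alpha=2) == 6
```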
Dependency hell is a well-known pain point in the development of large software projects, and machine learning (ML) code bases are not immune from it. In fact, ML applications suffer from an additional form, namely, "data source dependency hell". This term refers to the central role played by data and its unique quirks, which often lead to unexpected failures of ML models that cannot be explained by code changes. In this paper, we present an automated dependency mapping framework that allows MLOps engineers to monitor the whole dependency map of their models in a fast-paced engineering environment and thus mitigate, ahead of time, the consequences of any data source changes (e.g., re-train the model, ignore the data, set default data, etc.). Our system is based on a unified and generic approach, employing techniques from static analysis, with which data sources can be identified reliably for any type of dependency across a wide range of source languages and artefacts. The dependency mapping framework is exposed as a REST web API whose only input is the path to the Git repository hosting the code base. The framework is currently used by MLOps engineers at Microsoft, and we expect such dependency map APIs to be adopted more widely by MLOps engineers in the future.
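To illustrate the intended workflow, the sketch below queries such a service over REST; the endpoint URL and response fields are hypothetical, since the paper specifies only that the sole input is the path to the Git repository.

```python
# Hedged sketch of querying a dependency-mapping service (hypothetical API).
import requests

resp = requests.post(
    "https://depmap.example.com/api/v1/map",  # hypothetical endpoint
    json={"repository": "https://github.com/org/ml-project.git"},
)
resp.raise_for_status()
for source in resp.json().get("data_sources", []):  # hypothetical field
    print(source)  # e.g., tables, files, or feeds the model depends on
```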
Training effective embodied AI agents often involves manual reward engineering, expert imitation, specialized components such as maps, or leveraging additional sensors for depth and localization. Another approach is to use neural architectures alongside self-supervised objectives which encourage better representation learning. In practice, there are few guarantees that these self-supervised objectives encode task-relevant information. We propose the Scene Graph Contrastive (SGC) loss, which uses scene graphs as general-purpose, training-only, supervisory signals. The SGC loss does away with explicit graph decoding and instead uses contrastive learning to align an agent's representation with a rich graphical encoding of its environment. The SGC loss is generally applicable, simple to implement, and encourages representations that encode objects' semantics, relationships, and history. Using the SGC loss, we attain significant gains on three embodied tasks: Object Navigation, Multi-Object Navigation, and Arm Point Navigation. Finally, we present studies and analyses which demonstrate the ability of our trained representation to encode semantic cues about the environment.
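The sketch below gives a minimal InfoNCE-style rendering of this kind of contrastive alignment; the exact form of the SGC loss may differ, and the function here is an assumption for illustration.

```python
# Minimal InfoNCE-style sketch of contrastive agent-graph alignment
# (assumed form): pull each agent representation toward the encoding of
# its own scene graph and away from other graphs in the batch.
import torch
import torch.nn.functional as F

def sgc_style_loss(agent_repr, graph_repr, temperature=0.1):
    # agent_repr, graph_repr: (B, D) embeddings of matched pairs.
    a = F.normalize(agent_repr, dim=-1)
    g = F.normalize(graph_repr, dim=-1)
    logits = a @ g.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Diagonal entries are the positive (matched) pairs.
    return F.cross_entropy(logits, targets)
```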